Gradient Weights help Nonparametric Regressors

Kpotufe, Samory; Boularias, Abdeslam

Neural Information Processing Systems

In regression problems over $\mathbb{R}^d$, the unknown function $f$ often varies more in some coordinates than in others. We show that weighting each coordinate $i$ with the estimated norm of the $i$th derivative of $f$ is an efficient way to significantly improve the performance of distance-based regressors, e.g. kernel and $k$-NN regressors. We propose a simple estimator of these derivative norms and prove its consistency. Moreover, the proposed estimator is efficiently learned online.
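The idea can be illustrated with a small sketch (this is an assumption-laden toy version, not the paper's exact estimator): fit a pilot $k$-NN regressor, approximate each coordinate's derivative norm by an average finite difference of the pilot fit along that axis, then rescale coordinates by these weights before running $k$-NN again. The step size `h`, the choice of `k`, and the synthetic target are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_train, y_train, X_query, k=5):
    """Plain k-NN regression under Euclidean distance."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def gradient_weights(X, y, k=5, h=0.1):
    """Crude estimate of E|df/dx_i| via central finite differences
    of a pilot k-NN fit along each coordinate axis."""
    n, d = X.shape
    w = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        fp = knn_predict(X, y, X + e, k)  # pilot estimate at x + h*e_i
        fm = knn_predict(X, y, X - e, k)  # pilot estimate at x - h*e_i
        w[i] = np.mean(np.abs(fp - fm)) / (2 * h)
    return w

# Synthetic data: f varies mostly in coordinate 0, barely in the rest.
n, d = 500, 5
X = rng.uniform(-1, 1, (n, d))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(n)
X_test = rng.uniform(-1, 1, (200, d))
y_test = np.sin(3 * X_test[:, 0])

w = gradient_weights(X, y)
err_plain = np.mean((knn_predict(X, y, X_test) - y_test) ** 2)
err_gw = np.mean((knn_predict(X * w, y, X_test * w) - y_test) ** 2)
print(w, err_plain, err_gw)
```

Coordinate 0 should receive the largest weight, so after rescaling, distances are dominated by the relevant coordinate and the weighted $k$-NN error should drop relative to the unweighted one, which is exactly the effect the abstract describes.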